Creating realistic virtual assets is a time-consuming process: it usually involves an artist designing the object and then spending considerable effort tweaking its appearance. Intricate details and effects such as subsurface scattering elude real-time BRDFs, making it impossible to fully capture the appearance of some objects. Inspired by recent progress in neural rendering, we propose an approach for capturing real-world objects in everyday environments faithfully and quickly. We use a novel neural representation to reconstruct volumetric effects, such as translucent object parts, and preserve photorealistic object appearance. To support real-time rendering without compromising quality, our model uses a grid of features and a small MLP decoder that is transpiled into efficient shader code running at interactive framerates. This enables seamless integration of the proposed neural assets with existing mesh environments and objects. Thanks to the use of standard shader code, rendering is portable across a wide range of existing hardware and software systems.
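The feature-grid-plus-small-MLP design can be illustrated with a minimal sketch: trilinearly interpolate a dense feature grid at a query point, then decode the feature with a tiny MLP into color and density. All sizes, weights, and activation choices below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sizes; the actual grid resolution and MLP width are not given in the abstract.
GRID = 32          # voxels per axis
FDIM = 8           # features per voxel
rng = np.random.default_rng(0)

# Trainable parameters of the sketch: a dense feature grid and a 1-hidden-layer MLP.
features = rng.normal(size=(GRID, GRID, GRID, FDIM)).astype(np.float32)
W1 = rng.normal(size=(FDIM, 16)).astype(np.float32) * 0.1
b1 = np.zeros(16, dtype=np.float32)
W2 = rng.normal(size=(16, 4)).astype(np.float32) * 0.1   # outputs: RGB + density
b2 = np.zeros(4, dtype=np.float32)

def trilinear(grid, p):
    """Trilinearly interpolate a feature grid at a point p in [0, 1]^3."""
    q = p * (GRID - 1)
    i0 = np.floor(q).astype(int)
    i1 = np.minimum(i0 + 1, GRID - 1)
    t = q - i0
    out = np.zeros(grid.shape[-1], dtype=np.float32)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                out += w * grid[idx]
    return out

def decode(p):
    """Small MLP decoder: grid features at p -> (rgb, density)."""
    f = trilinear(features, p)
    h = np.maximum(W1.T @ f + b1, 0.0)          # ReLU hidden layer
    o = W2.T @ h + b2
    rgb = 1.0 / (1.0 + np.exp(-o[:3]))          # sigmoid -> colors in [0, 1]
    sigma = np.log1p(np.exp(o[3]))              # softplus -> non-negative density
    return rgb, sigma

rgb, sigma = decode(np.array([0.3, 0.5, 0.7]))
```

Because the decoder is only a handful of dense layers, it is small enough to be unrolled into straight-line shader code, which is what makes the transpilation approach practical.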
Neural Radiance Fields (NeRFs) encode the radiance in a scene parameterized by the scene's plenoptic function. This is achieved by using an MLP together with a mapping to a higher-dimensional space, and has been proven to capture scenes with a great level of detail. Naturally, the same parameterization can be used to encode additional properties of the scene, beyond just its radiance. A particularly interesting property in this regard is the semantic decomposition of the scene. We introduce a novel technique for semantic soft decomposition of neural radiance fields (named SSDNeRF) which jointly encodes semantic signals in combination with radiance signals of a scene. Our approach provides a soft decomposition of the scene into semantic parts, enabling us to correctly encode multiple semantic classes blending along the same direction -- an impossible feat for existing methods. Not only does this lead to a detailed, 3D semantic representation of the scene, but we also show that the regularizing effects of the MLP used for encoding help to improve the semantic representation. We show state-of-the-art segmentation and reconstruction results on a dataset of common objects and demonstrate how the proposed approach can be applied for high quality temporally consistent video editing and re-compositing on a dataset of casually captured selfie videos.
The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality. Both problems are highly challenging because hair has complex geometry and appearance and exhibits challenging motion. In this paper, we present a two-stage approach that models hair independently from the head to address these challenges in a data-driven manner. The first stage, state compression, learns a low-dimensional latent space of 3D hair states, containing motion and appearance, via a novel autoencoder-as-a-tracker strategy. To better disentangle hair and head during appearance learning, we employ multi-view hair segmentation masks in combination with a differentiable volumetric renderer. The second stage learns a novel hair dynamics model that performs temporal hair transfer based on the discovered latent codes. To enforce higher stability while driving our dynamics model, we employ the 3D point-cloud autoencoder from the compression stage to de-noise the hair state. Our model outperforms the state of the art in novel view synthesis and is capable of creating novel hair animations without relying on hair observations as a driving signal.
Coordinate-based volumetric representations have the potential to generate photorealistic virtual avatars from images. However, virtual avatars also need to be controllable, even in novel poses that may not have been observed. Traditional techniques, such as LBS, provide such control, but they usually require a hand-designed body template, 3D scan data, and limited appearance models. Neural representations, on the other hand, are powerful at capturing visual detail but remain under-explored for deforming dynamic articulated actors. In this paper, we propose TAVA, a method for creating Template-free Animatable Volumetric Actors based on neural representations. We rely solely on multi-view data and a tracked skeleton to create a volumetric model of an actor that can be animated at test time given a novel pose. Since TAVA does not require a body template, it is applicable to humans as well as other creatures such as animals. Furthermore, TAVA is designed to recover accurate dense correspondences, making it suitable for content-creation and editing tasks. Through extensive experiments, we demonstrate that the proposed method generalizes well to novel poses as well as unseen viewpoints, and we showcase basic editing capabilities.
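For context, the traditional LBS (linear blend skinning) technique contrasted with the neural approach above deforms each vertex by a weighted blend of bone transforms; a minimal sketch with made-up toy data:

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Classic LBS: each vertex is deformed by a per-vertex weighted
    blend of bone transforms (4x4 homogeneous matrices)."""
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    # Blend per vertex: sum_b weights[v, b] * transforms[b].
    blended = np.einsum("vb,bij->vij", weights, transforms)           # (V, 4, 4)
    out = np.einsum("vij,vj->vi", blended, homo)
    return out[:, :3]

# Toy rig: two vertices, two bones. Bone 0 is identity, bone 1 translates by +1 in x.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.5, 0.5]])   # rows sum to 1
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0
skinned = linear_blend_skinning(verts, weights, np.stack([T0, T1]))
# vertex 0 stays put; vertex 1 moves halfway to [1.5, 0, 0]
```

The skinning weights and template mesh here are exactly the hand-designed inputs that template-free methods like TAVA learn to do without.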
We present Virtual Elastic Objects (VEOs): virtual objects that not only look like their real-world counterparts but also behave like them, even when subject to novel interactions. Achieving this is a challenge: not only must objects be captured, including the physical forces acting on them, and then faithfully reconstructed and rendered, but plausible material parameters must also be discovered and simulated. To create VEOs, we build a multi-view capture system that captures objects under the influence of a compressed air stream. Building on recent advances in model-free, dynamic Neural Radiance Fields, we reconstruct the objects and the corresponding deformation fields. We propose to use a differentiable particle-based simulator to exploit these deformation fields to find representative material parameters, which enable us to run new simulations. To render the simulated objects, we devise a method for integrating simulation results with Neural Radiance Fields. The resulting method is applicable to a wide range of scenarios: it can handle objects composed of inhomogeneous material, with very different shapes, and it can simulate interactions with other virtual objects. We present our results using a newly collected dataset of 12 objects under a variety of force fields, which will be shared with the community.
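The idea of recovering material parameters by differentiating through a simulator can be illustrated with a deliberately tiny stand-in: a 1-D mass-spring "simulator" and finite-difference gradients in place of the paper's differentiable particle-based simulator. Everything below is a toy assumption, not the actual system.

```python
import numpy as np

def simulate(stiffness, steps=100, dt=0.01):
    """Toy 1-D 'simulator': a unit mass displaced by 1, pulled back by a
    spring of the given stiffness (explicit Euler integration). Stands in
    for a differentiable particle-based simulator."""
    x, v = 1.0, 0.0
    traj = []
    for _ in range(steps):
        v += -stiffness * x * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# The "observed" deformation comes from a ground-truth stiffness we try to recover.
true_k = 4.0
observed = simulate(true_k)

def loss(k):
    """Mismatch between simulated and observed deformation."""
    return float(np.mean((simulate(k) - observed) ** 2))

# Gradient descent on the material parameter, with finite-difference
# gradients standing in for the autodiff a differentiable simulator provides.
k, lr, eps = 1.0, 5.0, 1e-4
for _ in range(200):
    g = (loss(k + eps) - loss(k - eps)) / (2 * eps)
    k -= lr * g
# k is now close to true_k
```

The real method does the analogous fit in 3-D against reconstructed deformation fields, with gradients flowing through the particle simulation itself.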
Novel view synthesis for humans in motion is a challenging computer vision problem that enables applications such as free-viewpoint video. Existing methods typically use complex setups with multiple input views, 3D supervision, or pre-trained models that do not generalize to new identities. Aiming to address these limitations, we present a novel view synthesis framework that generates realistic renders from unseen viewpoints of any human captured by a single-view sensor with sparse RGB-D, similar to a low-cost depth camera, and without actor-specific models. We propose an architecture that learns dense features in novel views obtained by sphere-based neural rendering, and creates complete renders using a global context inpainting model. Additionally, an enhancer network improves the overall fidelity, producing crisp, detailed renders even in regions occluded in the original view. We show that our method generates high-quality novel views of both synthetic and real human actors given a single sparse RGB-D input. It generalizes to unseen identities and new poses, and faithfully reconstructs facial expressions. Our approach outperforms prior human view synthesis methods and is robust to different levels of input sparsity.
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance. Yet hair is a critical component of believable avatars. In this paper, we address the aforementioned problems: 1) we use a novel volumetric hair representation composed of thousands of primitives. Building on recent advances in neural rendering, each primitive can be rendered efficiently yet realistically. 2) To obtain a reliable control signal, we present a novel way of tracking hair at the strand level. To keep the computational effort manageable, we use guide hairs and classic techniques to expand those into a dense hood of hair. 3) To better enforce temporal consistency and the generalization ability of our model, we further optimize the 3D scene flow of our representation with multi-view optical flow, using volumetric ray marching. Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals. We compare our method against existing work on viewpoint synthesis and drivable animation, achieving state-of-the-art results.
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects...
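One classical rendering principle that many neural scene representations build on is emission-absorption volume rendering: per-sample densities and colors along a ray are alpha-composited into a pixel color. A minimal sketch of the standard quadrature (the sample values are made up):

```python
import numpy as np

def composite(sigmas, rgbs, deltas):
    """Emission-absorption volume rendering along one ray: convert
    densities to per-segment opacities, accumulate transmittance,
    and weight each sample's color accordingly."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                           # opacity per segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))    # transmittance to each sample
    weights = trans * alphas
    return weights @ rgbs                                             # (3,) pixel color

# Example: 4 samples along one ray with uniform spacing.
sigmas = np.array([0.0, 0.5, 3.0, 0.1])
rgbs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [1.0, 1.0, 1.0]])
deltas = np.full(4, 0.25)
pixel = composite(sigmas, rgbs, deltas)
```

Because every step is differentiable, a loss on the composited pixel color can be back-propagated to the scene representation that produced the densities and colors, which is exactly the inverse-rendering setup described above.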
Figure 1. Given a monocular image sequence, NR-NeRF reconstructs a single canonical neural radiance field to represent geometry and appearance, and a per-time-step deformation field. We can render the scene into a novel spatio-temporal camera trajectory that significantly differs from the input trajectory. NR-NeRF also learns rigidity scores and correspondences without direct supervision on either. We can use the rigidity scores to remove the foreground, we can supersample along the time dimension, and we can exaggerate or dampen motion.
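The canonical-field-plus-deformation-field split described in the caption can be sketched with toy closed-form fields standing in for NR-NeRF's learned MLPs (both functions below are made-up stand-ins):

```python
import numpy as np

def deformation_field(x, t):
    """Toy per-time-step deformation: points sway sinusoidally with time t.
    In NR-NeRF this field is a learned MLP, not a closed-form function."""
    offset = 0.1 * np.sin(2 * np.pi * t) * np.array([1.0, 0.0, 0.0])
    return x + offset          # warp the sample point into the canonical frame

def canonical_field(x):
    """Toy canonical radiance field: density falls off away from the origin."""
    sigma = np.exp(-np.sum(x ** 2))
    rgb = np.clip(0.5 + 0.5 * x, 0.0, 1.0)
    return rgb, sigma

def query(x, t):
    """Render-time query: warp first, then evaluate the single canonical field."""
    return canonical_field(deformation_field(np.asarray(x, dtype=float), t))

rgb, sigma = query([0.2, 0.0, 0.0], t=0.25)
```

Keeping geometry and appearance in one canonical field while time enters only through the warp is what makes effects like temporal supersampling and motion exaggeration possible: the deformation can be resampled or rescaled without retraining the scene.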
In this paper, we propose ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image. Existing approaches to digitize 3D humans struggle to handle pose variations and recover details. Also, they do not produce models that are animation ready. In contrast, ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image. A Semantic Space and a Semantic Deformation Field are created using a parametric 3D body estimator. They allow the transformation of 2D/3D clothed humans into a canonical space, reducing ambiguities in geometry caused by pose variations and occlusions in training data. Detailed surface geometry and appearance are learned using an implicit function representation with spatial local features. Furthermore, we propose additional per-pixel supervision on the 3D reconstruction using opacity-aware differentiable rendering. Our experiments indicate that ARCH increases the fidelity of the reconstructed humans. We obtain more than 50% lower reconstruction errors for standard metrics compared to state-of-the-art methods on public datasets. We also show numerous qualitative examples of animated, high-quality reconstructed avatars unseen in the literature so far.